6 research outputs found

    Differentiable Genetic Programming

    We introduce the use of high-order automatic differentiation, implemented via the algebra of truncated Taylor polynomials, in genetic programming. Using the Cartesian Genetic Programming encoding, we obtain a high-order Taylor representation of the program output that is then used to back-propagate errors during learning. The resulting machine learning framework is called differentiable Cartesian Genetic Programming (dCGP). In the context of symbolic regression, dCGP offers a new approach to the long-unsolved problem of constant representation in GP expressions. On several problems of increasing complexity we find that dCGP is able to recover the exact form of the symbolic expression as well as the values of its constants. We also demonstrate the use of dCGP to solve a large class of differential equations and to find prime integrals of dynamical systems, presenting, in both cases, results that confirm the efficacy of our approach.
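    A minimal sketch of the underlying idea, not the dCGP API itself: a truncated Taylor polynomial algebra in one variable is used to obtain exact derivatives of a candidate expression with respect to an ephemeral constant c, which are then used to refine c. The expression, data, and Newton-style update below are illustrative assumptions.

```python
# Truncated Taylor polynomial c0 + c1*dc + c2*dc^2 (order 2), enough to carry
# the value, first and second derivative of an expression w.r.t. the constant.
class Taylor:
    def __init__(self, coeffs):
        self.c = (list(coeffs) + [0.0, 0.0, 0.0])[:3]

    def __add__(self, other):
        other = other if isinstance(other, Taylor) else Taylor([other])
        return Taylor([a + b for a, b in zip(self.c, other.c)])

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Taylor) else Taylor([other])
        out = [0.0, 0.0, 0.0]
        for i, a in enumerate(self.c):
            for j, b in enumerate(other.c):
                if i + j < 3:
                    out[i + j] += a * b
        return Taylor(out)

    __rmul__ = __mul__

    def __sub__(self, other):
        other = other if isinstance(other, Taylor) else Taylor([other])
        return self + Taylor([-v for v in other.c])


# Target data generated by y = 2.5 * x**2; the "unknown" constant is 2.5.
xs = [0.5, 1.0, 1.5, 2.0]
ys = [2.5 * x * x for x in xs]

c = 1.0                            # initial guess for the constant
for _ in range(20):
    C = Taylor([c, 1.0, 0.0])      # expand the constant: c + 1*dc + 0*dc^2
    err = Taylor([0.0])
    for x, y in zip(xs, ys):
        r = C * x * x - y          # candidate program output minus target
        err = err + r * r          # squared error, propagated as a Taylor poly
    loss, d1, d2 = err.c           # value, dE/dc, (1/2) d2E/dc2
    c -= d1 / (2.0 * d2)           # Newton step on the constant
print(round(c, 6))                 # converges to 2.5
```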

    Speech Recognition for the iCub Platform

    This paper describes open-source software (available at https://github.com/robotology/natural-speech) to build automatic speech recognition (ASR) systems and run them within the YARP platform. The toolkit is designed (i) to allow non-ASR experts to easily create their own ASR system and run it on iCub, and (ii) to build deep-learning-based models specifically addressing the main challenges an ASR system faces in the context of verbal human-iCub interactions. The toolkit mostly consists of Python and C++ code and shell scripts integrated in YARP. As an additional contribution, a second codebase (written in Matlab) is provided for more expert ASR users who want to experiment with bio-inspired and developmental-learning-inspired ASR systems. Specifically, we provide code for two distinct kinds of speech recognition: "articulatory" and "unsupervised" speech recognition. The first is largely inspired by influential neurobiological theories of speech perception which assume speech perception to be mediated by brain motor cortex activities. Our articulatory systems have been shown to outperform strong deep-learning-based baselines. The second kind, the "unsupervised" systems, do not use any supervised information (contrary to most ASR systems, including our articulatory systems). To some extent, they mimic an infant who has to discover the basic speech units of a language by herself. In addition, we provide resources consisting of pre-trained deep learning models for ASR, and a 2.5-hour speech dataset of spoken commands, the VoCub dataset, which can be used to adapt an ASR system to the typical acoustic environments in which iCub operates.
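    A hedged sketch of how a YARP client might consume transcriptions produced by an ASR module on iCub. The port names and the payload layout are assumptions made for illustration; the actual names are defined by the natural-speech modules. Requires the YARP Python bindings.

```python
import yarp

yarp.Network.init()

# Input port of this client; the ASR module's (assumed) output port is
# connected to it over the YARP network.
port = yarp.BufferedPortBottle()
port.open("/myApp/asr:i")
yarp.Network.connect("/asr/transcription:o", "/myApp/asr:i")

try:
    while True:
        bottle = port.read(True)                 # blocking read of next result
        if bottle is not None:
            text = bottle.get(0).asString()      # assumed payload: (text ...)
            print("recognized:", text)
finally:
    port.close()
    yarp.Network.fini()
```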

    Discovering discrete subword units with binarized autoencoders and hidden-Markov-model encoders

    In this paper we address the problem of unsupervised learning of discrete subword units. Our approach is based on Deep Autoencoders (AEs), whose encoding node values are thresholded to subsequently generate a symbolic, i.e., 1-of-K (with K = no. of subwords), representation of each speech frame. We experiment with two variants of the standard AE, which we have named Binarized Autoencoder and Hidden-Markov-Model Encoder. The former forces the binary encoding nodes to have a U-shaped distribution (with peaks at 0 and 1) while minimizing the reconstruction error. The latter jointly learns the symbolic encoding representation (i.e., the subwords) and the prior and transition probability distributions of the learned subwords. The ABX evaluation of the Zero Resource Challenge - Track 1 shows that a deep AE with only 6 encoding nodes, which assigns to each frame a 1-of-K binary vector with K = 2^6, can outperform real-valued MFCC representations in the across-speaker setting. Binarized AEs can outperform standard AEs when using a larger number of encoding nodes, while HMM Encoders may allow more compact subword transcriptions without worsening the ABX performance.
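    A minimal sketch of the binarized-autoencoder idea, written in PyTorch (the framework, layer sizes, loss weight, and binarization penalty are assumptions, not the paper's exact recipe): sigmoid encoding units are pushed toward 0/1 while reconstruction error is minimized, and each frame's thresholded code is read as a 1-of-K symbol with K = 2^n_bits.

```python
import torch
import torch.nn as nn

n_feat, n_bits = 39, 6             # e.g. one MFCC frame, 6 encoding nodes

encoder = nn.Sequential(nn.Linear(n_feat, 128), nn.ReLU(),
                        nn.Linear(128, n_bits), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(n_bits, 128), nn.ReLU(),
                        nn.Linear(128, n_feat))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()),
                       lr=1e-3)

frames = torch.randn(256, n_feat)  # stand-in for a batch of speech frames

for _ in range(100):
    h = encoder(frames)
    recon = decoder(h)
    # Binarization pressure: h*(1-h) peaks at 0.5, so penalizing it drives the
    # encoding units toward a U-shaped (0/1) distribution.
    loss = ((recon - frames) ** 2).mean() + 0.1 * (h * (1 - h)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    bits = (encoder(frames) > 0.5).long()                       # binary code per frame
    symbols = (bits * (2 ** torch.arange(n_bits))).sum(dim=1)   # 1-of-K index, K = 2^6
```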

    Machine learning of optimal low-thrust transfers between near-earth objects

    During the initial phase of space trajectory planning and optimization, it is common to have to solve large-dimensional global optimization problems. Continuous low-thrust propulsion, in particular, is computationally very intensive when optimal solutions are sought. In this work, we investigate the application of machine learning regressors to estimate the final spacecraft mass m_f after an optimal low-thrust transfer between two Near-Earth Objects, instead of solving the corresponding optimal control problem (OCP). Such low-thrust transfers are of interest for several space missions currently being developed, such as NASA's NEA Scout. Previous work has shown machine learning to greatly improve the estimation accuracy in the case of short transfers within the main asteroid belt. We extend this work to also cover the more complicated case of multiple-revolution transfers in the near-Earth regime. In the process, we reduce the general OCP of solving for m_f to a much simpler OCP of determining the maximum initial spacecraft mass m* for which the transfer is feasible. This information, along with readily available information on the orbit geometries, is sufficient to learn the final mass m_f for the same transfer starting from any initial mass m_i. This results in a significant reduction of the computational cost compared to solving the full OCP.
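    A hedged sketch of the regression step described above: given orbit-geometry features of a transfer plus the maximum feasible initial mass m*, learn the final mass m_f without solving the OCP. The feature layout and the synthetic data below are illustrative placeholders, not the paper's dataset or model choice.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Placeholder features: gaps in semi-major axis / eccentricity / inclination
# between departure and arrival orbits, transfer time, and m*.
X = rng.uniform(size=(n, 5))
m_f = (1000.0 - 250.0 * X[:, 0] - 120.0 * X[:, 3] + 80.0 * X[:, 4]
       + rng.normal(scale=5.0, size=n))       # synthetic stand-in target

X_train, X_test, y_train, y_test = train_test_split(X, m_f, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out transfers:", model.score(X_test, y_test))
```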

    esa/pykep: Upgrade to pygmo 2.0 and more

    - Updating to pygmo 2 (and dropping PyGMO)
    - Adding the Pontryagin module
    - Bug fixes on planet ephemerides (affecting low inclinations and eccentricities)
    - Implementation of modified equinoctial parameters
    - Adding more examples
    - Documentation fixes
    - Clang-format and pep8 enforced
    - Some API improvements (more kwargs)
    - Name changed to pykep (not PyKEP)
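    The release notes above mention modified equinoctial parameters; below is a small sketch of the standard classical-to-equinoctial conversion, written in plain Python rather than through pykep's own API.

```python
from math import cos, sin, tan

def kep_to_mee(a, e, i, raan, argp, nu):
    """Classical elements (a, e, i, RAAN, argument of perigee, true anomaly;
    angles in radians) -> modified equinoctial elements (p, f, g, h, k, L)."""
    p = a * (1.0 - e ** 2)             # semi-latus rectum
    f = e * cos(argp + raan)
    g = e * sin(argp + raan)
    h = tan(i / 2.0) * cos(raan)
    k = tan(i / 2.0) * sin(raan)
    L = raan + argp + nu               # true longitude
    return p, f, g, h, k, L

print(kep_to_mee(7000.0, 0.01, 0.1, 0.3, 0.2, 1.0))   # km and radians
```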